The end-to-end principle is a classic design principle in computer networking,〔See Denning's Great Principles of Computing〕 first explicitly articulated in a 1981 conference paper by Saltzer, Reed, and Clark. It states that in a general-purpose network, application-specific functions ought to reside in the ''end hosts'' of the network rather than in ''intermediary nodes'', provided that they can be implemented "completely and correctly" in the end hosts.

The principle goes back to Paul Baran's 1960s work on obtaining reliability from unreliable parts. The basic intuition is that the payoffs from adding functions to a simple network diminish quickly, especially in cases where the end hosts must re-implement those functions themselves for reasons of completeness and correctness. Furthermore, since implementing any specific function incurs resource penalties whether or not the function is used, implementing a specific function ''in the network'' distributes those penalties among all clients, including those that do not need it.

The canonical example for the end-to-end principle is an arbitrarily reliable file transfer between two end-points in a distributed network of some nontrivial size: the only way the two end-points can obtain a completely reliable transfer is by transmitting and acknowledging a checksum for the entire data stream. In such a setting, lesser checksum and acknowledgement (ACK/NACK) protocols are justified only for the purpose of optimizing performance; they are useful to the vast majority of clients, but are not sufficient to fulfil the reliability requirement of this particular application. The thorough checksum is hence performed at the end-points, and the network maintains a relatively low level of complexity and reasonable performance for all clients.

In debates about network neutrality, a common interpretation of the end-to-end principle is that it implies a neutral, or "dumb", network.

==Basic content of the principle==
The fundamental notion behind the end-to-end principle is that for two processes communicating with each other via some communication means, the ''reliability'' obtained from that means cannot be expected to be perfectly aligned with the reliability requirements of the processes. In particular, meeting or exceeding very high reliability requirements of communicating processes separated by networks of nontrivial size is more costly than obtaining the required degree of reliability by positive end-to-end acknowledgements and retransmissions (referred to as PAR or ARQ). Put differently, it is far easier and more tractable to obtain reliability beyond a certain margin by mechanisms in the ''end hosts'' of a network than in the ''intermediary nodes'', especially when the latter are beyond the control of, and not accountable to, the former.〔The possibility of enforceable contractual remedies notwithstanding, it is impossible for any network in which intermediary resources are shared in a non-deterministic fashion to guarantee perfect reliability. At most, it may quote statistical performance averages.〕

An end-to-end PAR protocol with infinite retries can obtain arbitrarily high reliability from any network with a greater-than-zero probability of successfully transmitting data from one end to the other: if each attempt succeeds with probability p > 0, the probability that the first n attempts all fail is (1 − p)^n, which tends to zero as n grows.
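As a concrete sketch of this argument, the following minimal Python example retransmits each chunk until it gets through (the PAR loop) and then verifies a checksum over the entire stream at the end-points. This is an illustration only, not the original paper's formulation; the lossy channel <code>unreliable_send</code>, the chunk size, and the loss rate are hypothetical stand-ins.

<syntaxhighlight lang="python">
import hashlib
import random

def unreliable_send(payload: bytes, loss_rate: float = 0.3) -> bytes | None:
    """Hypothetical lossy channel: delivers the payload or silently drops it."""
    return payload if random.random() > loss_rate else None

def transfer(data: bytes, chunk_size: int = 1024, loss_rate: float = 0.3) -> bool:
    """Send each chunk with positive acknowledgement and retransmission (PAR),
    then verify a checksum over the whole stream at the receiving end-point."""
    received = bytearray()
    for offset in range(0, len(data), chunk_size):
        chunk = data[offset:offset + chunk_size]
        # Retry until the chunk gets through; any per-attempt success
        # probability p > 0 makes eventual delivery certain.
        while True:
            delivered = unreliable_send(chunk, loss_rate)
            if delivered is not None:  # receiver got the chunk and would ACK it
                received.extend(delivered)
                break
    # The end-to-end check: only the end-points can confirm the whole stream.
    return hashlib.sha256(bytes(received)).digest() == hashlib.sha256(data).digest()

if __name__ == "__main__":
    # Succeeds despite a 30% per-chunk loss rate, at the cost of retransmissions.
    print(transfer(b"example payload " * 1000))
</syntaxhighlight>

In a real protocol the acknowledgements themselves traverse the same unreliable network, which is what motivates retransmission timers and duplicate suppression; the sketch elides both.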
The end-to-end principle does not trivially extend to functions beyond end-to-end error control and correction. For example, no straightforward end-to-end arguments can be made for communication parameters such as latency and throughput. Based on a personal communication with Saltzer (lead author of the original end-to-end paper), Blumenthal and Clark note in a 2001 paper that from the beginning the end-to-end arguments revolved around requirements that could be implemented correctly at the end-points; if implementation inside the network is the only way to accomplish a requirement, then an end-to-end argument is not appropriate in the first place. The meaning of the end-to-end principle has been continuously reinterpreted ever since its initial articulation. Noteworthy formulations of the end-to-end principle can also be found prior to the seminal 1981 Saltzer, Reed, and Clark paper.